
    Sample-path large deviations for tandem and priority queues with Gaussian inputs

    This paper considers Gaussian flows multiplexed in a queueing network. Since a single node is a useful but often incomplete setting, we examine more advanced models. We focus on a (two-node) tandem queue fed by a large number of Gaussian inputs. With service rates and buffer sizes at both nodes scaled appropriately, Schilder's sample-path large-deviations theorem can be applied to calculate the asymptotics of the overflow probability of the second queue. More specifically, we derive a lower bound on the exponential decay rate of this overflow probability and present an explicit condition for the lower bound to match the exact decay rate. Examples show that this condition holds for a broad range of frequently used Gaussian inputs. The last part of the paper concentrates on a model for a single node equipped with a priority scheduling policy. We show that the analysis of the tandem queue carries over directly to this priority queueing system. Published at http://dx.doi.org/10.1214/105051605000000133 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
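The many-sources scaling described above can be summarized in a schematic formula (notation ours, purely illustrative, not taken from the paper): with n i.i.d. Gaussian inputs and service rates and buffer level b scaled by n, the overflow probability of the second queue obeys a large-deviations principle, and Schilder's theorem expresses the decay rate as the cost of the cheapest sample path causing overflow:

```latex
% Illustrative notation: Q_2^{(n)} is the workload of the second queue
% with n inputs, b the scaled buffer level.
\[
  \lim_{n\to\infty} \frac{1}{n}\,\log \mathbb{P}\bigl(Q_2^{(n)} > nb\bigr)
  = -I(b),
\qquad
  I(b) = \inf_{f \in \mathcal{A}(b)} \tfrac{1}{2}\,\lVert f \rVert_{R}^{2},
\]
% where \mathcal{A}(b) denotes the set of input sample paths that lead to
% overflow, and \lVert\cdot\rVert_{R} is the norm of the reproducing-kernel
% Hilbert space induced by the covariance structure of a single input.
```

The lower bound mentioned in the abstract corresponds to restricting the infimum to a convenient subclass of overflow paths; the explicit condition identifies when the cheapest path already lies in that subclass.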

    GPS queues with heterogeneous traffic classes

    We consider a queue fed by a mixture of light-tailed and heavy-tailed traffic. The two traffic classes are served in accordance with the generalized processor sharing (GPS) discipline. GPS-based scheduling algorithms, such as weighted fair queueing (WFQ), have emerged as an important mechanism for achieving service differentiation in integrated networks. We derive the asymptotic workload behavior of the light-tailed class for the situation where its GPS weight is larger than its traffic intensity. The GPS mechanism ensures that the workload is bounded above by that in an isolated system with the light-tailed class served in isolation at a constant rate equal to its GPS weight. We show that the workload distribution is in fact asymptotically equivalent to that in the isolated system, multiplied with a certain pre-factor, which accounts for the interaction with the heavy-tailed class. Specifically, the pre-factor represents the probability that the heavy-tailed class is backlogged long enough for the light-tailed class to reach overflow. The results provide crucial qualitative insight in the typical overflow scenario
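As a concrete illustration of the GPS discipline discussed above, the sketch below (a minimal model of our own, not code from the paper; all names are illustrative) computes the instantaneous service rates of the traffic classes: each backlogged class receives capacity in proportion to its GPS weight, and capacity left unused by empty classes is redistributed among the backlogged ones.

```python
def gps_rates(backlog, weights, capacity):
    """Instantaneous GPS service rates per class (illustrative sketch).

    A backlogged class i receives capacity * w_i / (sum of weights of all
    backlogged classes); empty classes receive nothing, so their share is
    automatically redistributed to the backlogged classes.
    """
    active = [i for i, b in enumerate(backlog) if b > 0]
    if not active:
        return [0.0] * len(backlog)
    total_w = sum(weights[i] for i in active)
    return [capacity * weights[i] / total_w if i in active else 0.0
            for i in range(len(backlog))]
```

This makes the bound in the abstract visible: as long as the light-tailed class is backlogged, its rate is at least its weight share of the capacity, and it only improves when the heavy-tailed class drains.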

    Investigating cellular electroporation using planar membrane models and miniaturized devices

    This thesis focuses on increasing our understanding of the electroporation process. Electroporation is a technique used to introduce foreign molecules into cells that normally cannot pass the cell membrane. By applying a short but strong electric field pulse, pores appear in the membrane through which these molecules can enter the cell. This process is crucial for a number of biotechnological and medical applications, such as drug delivery, particle delivery and gene transfection. The technique is studied with model systems for the cell membrane (bilayer lipid membranes, or BLMs) as well as in cells. The membrane models are employed here to study the effect of membrane composition on the pore formation process. The composition is altered by the addition of different types of phospholipids, cholesterol and proteins, and the complexity of the models is increased from binary via ternary to quaternary systems. The results show that membrane composition can have a large effect on the potential required to form pores in the membranes. In addition, a microfluidic system for BLM experimentation is developed. In this device, the BLMs are created vertically, allowing electrical and optical measurements to be combined. This is a major advantage over a conventional system, in which only electrical analysis is feasible. The main purpose of this device is as a platform for protein studies for drug-screening purposes. Lastly, a cell-monolayer electroporation device is employed to verify the hypothesis, arising from the BLM results, that polarized cells become electroporated at a different applied potential than non-polarized cells. In this device, cells are grown either onto a layer of hydrogel on top of an electrode substrate (to induce membrane polarization) or onto a bare electrode substrate

    Evolution of the luminosity-to-halo mass relation of LRGs from a combined SDSS-DR10+RCS2 analysis

    We study the evolution of the luminosity-to-halo mass relation of Luminous Red Galaxies (LRGs). We select a sample of 52 000 LOWZ and CMASS LRGs from the Baryon Oscillation Spectroscopic Survey (BOSS) SDSS-DR10 in the ~450 deg^2 that overlaps with imaging data from the second Red-sequence Cluster Survey (RCS2), group them into bins of absolute magnitude and redshift, and measure their weak lensing signals. The source redshift distribution has a median of 0.7, which allows us to study the lensing signal as a function of lens redshift. We interpret the lensing signal using a halo model, from which we obtain the halo masses as well as the normalisations of the mass-concentration relations. We find that the concentration of haloes that host LRGs is consistent with dark-matter-only simulations once we allow for miscentering or satellites in the modelling. The slope of the luminosity-to-halo mass relation has a typical value of 1.4 and does not change with redshift, but we do find evidence for a change in amplitude: the average halo mass of LOWZ galaxies increases by 25_{-14}^{+16} % between z=0.36 and 0.22 to an average value of 6.43+/-0.52 x 10^13 h70^-1 Msun. If we extend the redshift range using the CMASS galaxies and assume that they are the progenitors of the LOWZ sample, we find that the average mass of LRGs increases by 80^{+39}_{-28} % between z=0.6 and 0.2. Comment: 20 pages, 11 figures, accepted for publication in A&

    Volume currents in forward and inverse MEG simulations using realistic head models

    Many magnetoencephalography (MEG) forward and inverse simulation models employ spheres, a geometry that does not require consideration of volume currents. With more realistic inhomogeneous, anisotropic, non-spherical head models, volume currents cannot be ignored. We verify the accuracy of the finite element method in MEG simulations by comparing its results for a sphere containing dipoles to those obtained from the analytic solution. We then use the finite element method to show that in a realistic model, the magnetic field normal to the MEG detector due to volume currents often has a magnitude on the same order as, or greater than, the magnitude of the primary magnetic field from the dipole. Forward and inverse MEG simulations using the realistic model demonstrate the disparity between calculations that include volume currents and those that do not. Volume currents should be included in any accurate calculation of MEG results, whether for a forward or an inverse simulation
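For orientation, the "primary" field referred to above is the Biot-Savart field of the current dipole alone, excluding the volume-current contribution; the sketch below (our own illustration, with hypothetical names, not code from the report) computes it.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A


def primary_field(q, r_q, r):
    """Primary magnetic field of a current dipole, volume currents excluded.

    q   : dipole moment (A*m), 3-vector
    r_q : dipole location (m), 3-vector
    r   : field point (m), 3-vector
    Returns B_p(r) = (mu0 / 4pi) * q x (r - r_q) / |r - r_q|^3.
    """
    d = np.asarray(r, float) - np.asarray(r_q, float)
    return MU0 / (4 * np.pi) * np.cross(q, d) / np.linalg.norm(d) ** 3
```

One standard sanity check against the spherical analytic solution: in a spherically symmetric conductor the volume currents contribute no radial field component, so the radial component of the total field must equal that of this primary field.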

    Elastic Multi-resource Network Slicing: Can Protection Lead to Improved Performance?

    In order to meet the performance/privacy requirements of future data-intensive mobile applications, e.g., self-driving cars, mobile data analytics, and AR/VR, service providers are expected to draw on shared storage/computation/connectivity resources at the network "edge". To be cost-effective, a key functional requirement for such infrastructure is enabling the sharing of heterogeneous resources amongst tenants/service providers supporting spatially varying and dynamic user demands. This paper proposes a resource allocation criterion, namely Share Constrained Slicing (SCS), for slices allocated predefined shares of the network's resources, which extends the traditional alpha-fairness criterion by striking a balance between inter- and intra-slice fairness and overall efficiency. We show that SCS has several desirable properties, including slice-level protection, envy-freeness, and load-driven elasticity. In practice, mobile users' dynamics could make the cost of implementing SCS high, so we discuss the feasibility of using a simpler (dynamically) weighted max-min as a surrogate resource allocation scheme. For a setting with stochastic loads and elastic user requirements, we establish a sufficient condition for the stability of the associated coupled network system. Finally, and perhaps surprisingly, we show via extensive simulations that while SCS (and/or the surrogate weighted max-min allocation) provides inter-slice protection, it can achieve improved job delay and/or perceived throughput compared to other weighted max-min based allocation schemes whose intra-slice weight allocation is not share-constrained, e.g., traditional max-min or discriminatory processor sharing
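The weighted max-min allocation used as a surrogate above can be computed by progressive filling; the sketch below (a minimal single-resource version of our own, with illustrative names, not the paper's implementation) repeatedly splits the remaining capacity in proportion to the weights of unsatisfied users, letting satisfied users drop out and free their surplus.

```python
def weighted_max_min(demands, weights, capacity):
    """Weighted max-min allocation of one divisible resource.

    Progressive filling: in each round, split the remaining capacity
    among still-unsatisfied users in proportion to their weights; users
    whose demand is met are finalized and removed, and their surplus is
    redistributed in the next round.
    """
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = capacity
    while active and remaining > 1e-12:
        w = sum(weights[i] for i in active)
        share = {i: remaining * weights[i] / w for i in active}
        done = {i for i in active if alloc[i] + share[i] >= demands[i]}
        if not done:  # nobody saturates: hand out the shares and stop
            for i in active:
                alloc[i] += share[i]
            break
        for i in done:  # finalize saturated users, free their surplus
            remaining -= demands[i] - alloc[i]
            alloc[i] = demands[i]
        active -= done
    return alloc
```

Share-constrained schemes differ only in how the weights are derived: under SCS, a slice's predefined share is divided among its own users, which is what yields the inter-slice protection property.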

    Convergence and Divergence in Attachment Style Across Male and Female College Students' Friendships and Romantic Relationships

    Attachment representations in friendship and romantic relationship contexts were examined in a sample of 398 college students. Analyses examined patterns of attachment style in both relationship contexts, divergence and convergence in attachment style, and links between attachment representations and negative peer and romantic relationship experiences (i.e., relational and physical victimization and betrayal). The majority of participants reported more secure attachment representations, relative to preoccupied or dismissing attachment. However, analysis of biological sex indicated that males reported more dismissing attachment styles with both friends and romantic partners, relative to females. Additionally, significant links were observed between negative peer and romantic relationship experiences and attachment representations, in theoretically consistent directions

    Intrinsic alignment of redMaPPer clusters: cluster shape-matter density correlation

    We measure the alignment of the shapes of galaxy clusters, as traced by their satellite distributions, with the matter density field using the public redMaPPer catalogue based on Sloan Digital Sky Survey–Data Release 8 (SDSS-DR8), which contains 26 111 clusters up to z ∼ 0.6. The clusters are split into nine redshift and richness samples; in each of them, we detect a positive alignment, showing that clusters point towards density peaks. We interpret the measurements within the tidal alignment paradigm, allowing for a richness and redshift dependence. The intrinsic alignment (IA) amplitude at the pivot redshift z = 0.3 and pivot richness λ = 30 is A_IA^gen = 12.6_{-1.2}^{+1.5}. We obtain tentative evidence that the signal increases towards higher richness and lower redshift. Our measurements agree well with results of maxBCG clusters and with dark-matter-only simulations. Comparing our results to the IA measurements of luminous red galaxies, we find that the IA amplitude of galaxy clusters forms a smooth extension towards higher mass. This suggests that these systems share a common alignment mechanism, which can be exploited to improve our physical understanding of IA
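The richness and redshift dependence about the pivots is commonly parameterized as a double power law (notation illustrative; the slopes β_λ and η_z are fitted parameters):

```latex
% Illustrative power-law parameterization about the pivot values:
\[
  A_{\mathrm{IA}}(\lambda, z) =
    A_{\mathrm{IA}}^{\mathrm{gen}}
    \left(\frac{\lambda}{\lambda_{\mathrm{piv}}}\right)^{\beta_{\lambda}}
    \left(\frac{1+z}{1+z_{\mathrm{piv}}}\right)^{\eta_{z}},
  \qquad \lambda_{\mathrm{piv}} = 30,\quad z_{\mathrm{piv}} = 0.3 .
\]
```

In this form the quoted amplitude is the value at the pivots, and the tentative trends correspond to β_λ > 0 and η_z < 0.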

    Lentigo maligna and radiotherapy.

    The first part of the paper is devoted to a transient analysis of traffic generated by bursty sources. These sources are governed by a modulating process, whose state determines the rate at which the source transmits. The class of modulating processes contains, e.g., on/off traffic sources with general on and off times (but is considerably broader). We focus on the probability of extreme fluctuations of the resulting traffic rate; more precisely, we determine the probability of the number of sources in the on state reaching a certain threshold, given a measurement of the number of sources in the on state t units of time ago. In particular, we derive large-deviations asymptotics of this probability when the number of sources is large. These asymptotics are numerically manageable, and it is empirically verified that they lead to an overestimation of the probability of interest. The analysis is extended to alternative measurement procedures. These procedures make it possible to take into account, for instance, more historical measurements than just one, possibly combined with an exponential weighting of these measurements. In the second part of the paper, we apply the asymptotic calculation methods to gain insight into the feasibility of measurement-based admission control (MBAC) algorithms for ATM or IP networks. These algorithms attempt to regulate the network's load (to provide customers with a sufficient Quality of Service) while achieving an acceptable utilization of the resources. An MBAC algorithm may base acceptance or rejection of a new request on the measured momentary load imposed on the switch or router; if this load is below a given threshold, the source can be admitted. We investigate whether such a scheme is robust under the possible stochastic properties of the offered traffic. Both the burst level (i.e., the distribution of the on and off times of the sources) and the call level (particularly the distribution of the call duration) are taken into account. Special attention is paid to the influence of bursts, silences, or call durations having a distribution with a 'heavy tail'
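The admission rule described above can be sketched in a few lines (a toy model of our own; the class name, parameters, and the exponential smoothing are illustrative assumptions, not the paper's algorithm): the controller tracks a smoothed estimate of the measured momentary load and admits a new flow only if the estimate plus the flow's declared peak rate stays below a utilization threshold.

```python
class MBAC:
    """Toy measurement-based admission control (illustrative only)."""

    def __init__(self, capacity, threshold=0.9, alpha=0.3):
        self.capacity = capacity
        self.threshold = threshold  # target utilization fraction
        self.alpha = alpha          # EWMA smoothing weight
        self.load = 0.0             # smoothed momentary-load estimate

    def measure(self, momentary_load):
        """Fold a new load measurement into the exponential average."""
        self.load = self.alpha * momentary_load + (1 - self.alpha) * self.load

    def admit(self, peak_rate):
        """Admit a new flow if the estimated load leaves enough headroom."""
        return self.load + peak_rate <= self.threshold * self.capacity
```

The robustness question studied in the paper is precisely whether such a rule, driven only by (possibly stale or weighted) measurements, keeps overload rare when bursts, silences, or call durations are heavy-tailed.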